    On the role of the upper part of words in lexical access: evidence with masked priming

    More than 100 years ago, Huey (1908) indicated that the upper part of words was more relevant for perception than the lower part. Here we examined whether mutilated words, with either their upper or lower portion removed, can automatically access their word units in the mental lexicon. To that end, we conducted four masked repetition priming experiments with the lexical decision task. Results showed that mutilated primes produced a sizeable masked repetition priming effect. Furthermore, the magnitude of the masked repetition priming effect was greater when the upper part of the primes was preserved than when the lower portion was preserved; this was the case not only when the mutilated words were presented in lowercase but also when they were presented in uppercase. Taken together, these findings suggest that the front end of computational models of visual-word recognition should be modified to provide a more realistic account at the level of letter features. The research reported in this article has been partially supported by Grants PSI2008-04069/PSIC and CONSOLIDER-INGENIO2010 CSD2008-00048 from the Spanish Ministry of Science and Innovation and by Grant PTDC/PSI-PCO/104671/2008 from the Portuguese Foundation for Science and Technology.
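    As an illustration of the paradigm summarised above, the following is a minimal, hypothetical sketch of a masked repetition priming trial for a lexical decision task. The mask, stimuli, and durations are illustrative assumptions, not the parameters reported in the study, and the graphical mutilation of the primes (removing the upper or lower half of the letters) is only noted in a comment.

```python
# Hypothetical sketch of one masked repetition priming trial (lexical decision).
# Durations and stimuli are illustrative assumptions, not the study's parameters.

def build_trial(prime, target, mask_ms=500, prime_ms=50):
    """Return the ordered events (name, stimulus, duration in ms) of one trial."""
    return [
        ("forward_mask", "#" * len(target), mask_ms),  # pattern mask
        ("prime", prime, prime_ms),                    # brief prime; in the study the
                                                       # prime's upper or lower half is
                                                       # deleted graphically
        ("target", target.upper(), None),              # shown until word/nonword response
    ]

# Repetition condition: prime and target are the same word.
# A control condition would use an unrelated prime instead.
for name, stimulus, duration in build_trial("table", "table"):
    print(name, stimulus, duration)
```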

    TAPCHA: An Invisible CAPTCHA Scheme

    TAPCHA is a universal CAPTCHA scheme designed for touch-enabled smart devices such as smartphones, tablets, and smartwatches. The main difference between TAPCHA and other CAPTCHA schemes is that TAPCHA retains its security by making the CAPTCHA test ‘invisible’ to bots. It then utilises context effects to maintain the readability of the instruction for human users, which in turn guarantees the usability of the scheme. Two reference designs, TAPCHA SHAPE & SHADE and TAPCHA MULTI, are developed to demonstrate the use of this scheme.

    Interference between Sentence Processing and Probabilistic Implicit Sequence Learning

    During sentence processing we decode the sequential combination of words, phrases, or sentences according to previously learned rules. The computational mechanisms and neural correlates of these rules are still much debated. Another key issue is whether sentence processing relies solely on language-specific mechanisms or is also governed by domain-general principles. In the present study, we investigated the relationship between sentence processing and implicit sequence learning in a dual-task paradigm in which the primary task was non-linguistic (the Alternating Serial Reaction Time task, which measures probabilistic implicit sequence learning), while the secondary task was a sentence comprehension task relying on syntactic processing. We used two control conditions: a non-linguistic one (a math task) and a linguistic one (a word processing task). We show that sentence processing interfered with the probabilistic implicit sequence learning task, whereas the other two tasks did not produce a similar effect. Our findings suggest that operations during sentence processing draw on resources underlying domain-general probabilistic procedural learning, and they provide a bridge between two competing frameworks of language processing: procedural and statistical models of language are not mutually exclusive, particularly for sentence processing. These results show that the implicit procedural system is engaged in sentence processing, although at the level of mechanisms language may still be based on statistical computations.
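    To make the primary task concrete, here is a minimal, hypothetical sketch of how an Alternating Serial Reaction Time (ASRT) stimulus stream can be generated: fixed pattern positions alternate with random positions, so some runs of three stimuli occur more often than others and these regularities can be learned implicitly. The specific pattern, positions, and stream length below are illustrative assumptions rather than the study's actual design.

```python
import random

# Hypothetical ASRT stream generator: predictable pattern elements alternate
# with unpredictable random elements. The pattern, the four positions, and the
# number of repetitions are illustrative assumptions, not the study's parameters.

PATTERN = [2, 1, 3, 4]      # fixed order of screen positions
POSITIONS = [1, 2, 3, 4]    # possible stimulus locations

def asrt_stream(n_repetitions=2, seed=0):
    """Return a list of stimulus positions alternating pattern and random elements."""
    rng = random.Random(seed)
    stream = []
    for _ in range(n_repetitions):
        for p in PATTERN:
            stream.append(p)                      # predictable pattern element
            stream.append(rng.choice(POSITIONS))  # unpredictable random element
    return stream

print(asrt_stream())  # e.g. [2, 4, 1, 4, 3, 1, 4, 3, ...]
```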